Video anomaly detection (VAD) is an important topic in computer vision. Motivated by recent advances in self-supervised learning, this paper addresses VAD by solving an intuitive yet challenging pretext task, spatio-temporal jigsaw puzzles, which is cast as a multi-label fine-grained classification problem. Our method enjoys several advantages over existing works: 1) the spatio-temporal jigsaw puzzles are decoupled along the spatial and temporal dimensions, capturing highly discriminative appearance and motion features, respectively; 2) full permutations are used to provide abundant jigsaw puzzles covering various difficulty levels, allowing the network to distinguish subtle spatio-temporal differences between normal and abnormal events; and 3) the pretext task is solved in an end-to-end manner without relying on any pre-trained models. Our method outperforms state-of-the-art methods on three public benchmarks. Especially on ShanghaiTech Campus, the result is superior to reconstruction- and prediction-based methods.
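As an illustration of the pretext task described above, the following minimal sketch (NumPy) builds a temporally permuted jigsaw sample from a short clip; the tile count, clip shape, and the use of the permutation index as the classification label are illustrative assumptions rather than the paper's exact protocol.

```python
import itertools
import numpy as np

T_TILES = 4                                                     # number of temporal segments (assumed)
TEMPORAL_PERMS = list(itertools.permutations(range(T_TILES)))   # full permutation set

def make_temporal_jigsaw(clip, rng):
    """clip: (T, H, W, C) array. Returns the shuffled clip and the permutation index,
    which serves as the temporal label of the fine-grained classification task."""
    segs = np.split(clip, T_TILES, axis=0)            # cut the clip into equal segments
    label = rng.integers(len(TEMPORAL_PERMS))         # sample a permutation uniformly
    perm = TEMPORAL_PERMS[label]
    shuffled = np.concatenate([segs[i] for i in perm], axis=0)
    return shuffled, label

rng = np.random.default_rng(0)
clip = rng.random((16, 64, 64, 3))                    # toy 16-frame clip
x, y = make_temporal_jigsaw(clip, rng)
print(x.shape, y)                                     # the network is trained to recover y from x
```

A spatial counterpart would split each frame into patches and permute those instead, providing the second label of the multi-label task.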
The CO2 capture efficiency in solvent-based carbon capture systems (CCSs) depends critically on the gas-solvent interfacial area (IA), making maximization of IA a foundational challenge in CCS design. While the IA associated with a particular CCS design can be estimated via computational fluid dynamics (CFD) simulation, using CFD to derive the IAs associated with many CCS designs is prohibitively costly. Fortunately, previous works such as Deep Fluids (DF) (Kim et al., 2019) show that large simulation speedups are achievable by replacing CFD simulators with neural network (NN) surrogates that faithfully mimic the CFD simulation process. This raises the possibility of a fast, accurate replacement for the CFD simulator and, therefore, efficient approximation of the IAs needed for CCS design optimization. Thus, here, we build on the DF approach to develop surrogates that successfully apply to our complex carbon-capture CFD simulations. Our optimized DF-style surrogates produce large speedups (4000x) while obtaining IA relative errors as low as 4% on unseen CCS configurations that lie within the range of the training configurations. This hints at the promise of NN surrogates for our CCS design optimization problem. Nonetheless, DF has inherent limitations with respect to CCS design (e.g., limited transferability of a trained model to new CCS packings). We close with ideas for addressing these challenges.
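To make the surrogate idea concrete, here is a minimal sketch of a neural-network regressor that maps CCS design parameters to an IA value and is evaluated by relative error on held-out configurations; the feature set, network size, and synthetic data are assumptions for illustration and do not reflect the DF-style architecture, which operates on full CFD fields.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 4))                              # toy design parameters (hypothetical)
y = 2.0 + X @ np.array([0.5, 1.2, -0.3, 0.8]) + 0.05 * rng.standard_normal(500)  # stand-in for CFD-computed IA

# Train the surrogate on CFD-labelled configurations, then test on held-out ones.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
surrogate.fit(X[:400], y[:400])

rel_err = np.abs(surrogate.predict(X[400:]) - y[400:]) / np.abs(y[400:])
print(f"median relative error on held-out configurations: {np.median(rel_err):.3f}")
```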
Illegal vehicle parking is a common urban problem faced by major cities around the world, as it incurs air pollution and traffic accidents. Governments rely heavily on active human effort to detect illegal parking events. However, such an approach is extremely inefficient for covering a large city, since the police have to patrol the entire road network. The massive, high-quality shared-bike trajectories from Mobike offer us a unique opportunity to design a ubiquitous illegal parking detection approach, as most illegal parking events happen at the curbside and have a significant impact on bike riders. The detection results can guide patrol planning, i.e., dispatching patrol police to regions with higher illegal parking risk, and thus further improve patrol efficiency. Inspired by this idea, the proposed framework employs three main components: 1) trajectory pre-processing, which filters outlier GPS points, performs map-matching, and builds trajectory indexes; 2) illegal parking detection, which models the normal trajectories, extracts features from the trajectories under evaluation, and utilizes a test-based method to discover illegal parking events (see the sketch below); and 3) patrol planning, which leverages the detection results as reference context and formulates the scheduling task as a multi-agent reinforcement learning problem to guide the patrol police. Finally, extensive experiments are presented to validate the effectiveness of the illegal parking detection as well as the improvement in patrol efficiency.
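As a rough illustration of the test-based detection component, the sketch below compares a road segment's current trajectory features against features collected under normal conditions and flags the segment when the two distributions differ significantly; the chosen feature (lateral offset of map-matched points from the bike lane) and the significance test are illustrative assumptions, not the framework's exact method.

```python
import numpy as np
from scipy.stats import ks_2samp

def segment_is_suspicious(normal_offsets, current_offsets, alpha=0.01):
    """Two-sample test between historical (normal) and current lateral offsets
    of map-matched bike GPS points on one road segment."""
    _, p_value = ks_2samp(normal_offsets, current_offsets)
    return p_value < alpha                            # significant shift -> possible illegal parking

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.5, size=2000)              # riders keep close to the bike lane
detour = rng.normal(1.2, 0.5, size=300)               # riders swerve around parked cars
print(segment_is_suspicious(normal, rng.normal(0.0, 0.5, size=300)))  # expected: False
print(segment_is_suspicious(normal, detour))                           # expected: True
```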
In response to continuously changing raw material supplies and market demands for products of different specifications, processes need to be operated under time-varying conditions and objectives (e.g., setpoints) to improve process economics, compared with traditional process operation around a predetermined equilibrium. This paper develops a contraction theory-based control approach for nonlinear chemical processes to achieve time-varying reference tracking. The approach leverages the universal approximation capability of neural networks together with discrete-time contraction analysis and control. It involves training a neural network to learn a contraction metric and a differential feedback gain, which are embedded in a contraction-based controller. A second, separate neural network is also incorporated into the control loop to learn the uncertain system model parameters online. The resulting control scheme achieves efficient offset-free tracking of time-varying references under a full range of model uncertainty, without requiring the controller structure to be redesigned as the reference changes. It is a robust approach that can handle bounded parametric uncertainties in the process model, as commonly encountered in industrial (chemical) processes, and it also ensures process stability during online simultaneous learning and control. Simulation examples are provided to illustrate the approach.
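A minimal sketch of the core idea, under simplifying assumptions, is given below (PyTorch): a single network outputs a state-dependent contraction metric M(x) and differential feedback gain K(x), and is trained so that a discrete-time contraction condition holds on sampled states for toy linearized dynamics. The dynamics, network sizes, contraction rate, and the use of the same-state metric on both sides of the condition are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

n, m, beta = 2, 1, 0.1                                 # state/input dimensions, contraction rate (assumed)
A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])             # toy discrete-time Jacobian of the dynamics
B = torch.tensor([[0.0], [0.1]])

class MetricGainNet(nn.Module):
    """Outputs a positive-definite contraction metric M(x) and a gain K(x) per state."""
    def __init__(self, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n, hidden), nn.Tanh(),
                                  nn.Linear(hidden, n * n + m * n))

    def forward(self, x):
        out = self.body(x)
        L = out[..., :n * n].reshape(-1, n, n).tril()   # Cholesky-like factor
        M = L @ L.transpose(-1, -2) + 1e-3 * torch.eye(n)
        K = out[..., n * n:].reshape(-1, m, n)
        return M, K

net = MetricGainNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    x = torch.randn(64, n)                             # sampled states
    M, K = net(x)
    Acl = A + B @ K                                    # closed-loop Jacobian per sample
    # Contraction residual (A+BK)^T M (A+BK) - (1-beta) M; its eigenvalues should be <= 0.
    R = Acl.transpose(-1, -2) @ M @ Acl - (1 - beta) * M
    loss = torch.relu(torch.linalg.eigvalsh(R).max(dim=-1).values).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final contraction violation: {loss.item():.4f}")
```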
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest collection of original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extract the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we perform a thorough evaluation of MGTAB and other public datasets. Our experiments show that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
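The sketch below (PyTorch) illustrates one way a text embedding can condition a segmentation head, in the spirit of the CLIP-driven design described above: the embedding is mapped to the parameters of a per-class mask decoder applied to shared image features. The feature sizes, the parameter-generation layer, and the random placeholder standing in for a real CLIP text-encoder output are assumptions, not the Universal Model's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_C, EMB_C = 32, 512                     # feature channels / CLIP embedding size (assumed)

class TextConditionedHead(nn.Module):
    def __init__(self):
        super().__init__()
        # Maps a class's text embedding to the weights and bias of a 1x1 conv mask decoder.
        self.param_gen = nn.Linear(EMB_C, FEAT_C + 1)

    def forward(self, feats, text_emb):
        """feats: (B, FEAT_C, H, W); text_emb: (EMB_C,) embedding of one class prompt."""
        params = self.param_gen(text_emb)
        w = params[:FEAT_C].reshape(1, FEAT_C, 1, 1)
        b = params[FEAT_C:]
        return torch.sigmoid(F.conv2d(feats, w, b))    # (B, 1, H, W) mask for this class

head = TextConditionedHead()
feats = torch.randn(2, FEAT_C, 64, 64)                 # backbone features for 2 CT slices
liver_emb = torch.randn(EMB_C)                         # stand-in for a CLIP text embedding
print(head(feats, liver_emb).shape)                    # torch.Size([2, 1, 64, 64])
```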
Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn the weight of each view adaptively, the lack of research on explicitly quantifying the data quality of each view during fusion leaves these models unexplainable, and they perform unsatisfactorily and inflexibly in downstream remote sensing tasks. To fill this gap, in this paper, evidential deep learning is introduced to the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value that describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. On two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, the proposed approach achieves state-of-the-art results, demonstrating its effectiveness. The code and datasets of this article are available at the following address: https://github.com/gaopiaoliang/Evidential.
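To illustrate the uncertainty-aware fusion described above, the following sketch maps each view's non-negative evidence to a Dirichlet distribution, derives a subjective-logic uncertainty from its strength, and weights the views so that the lower-risk view dominates the decision; the specific weighting rule is an illustrative assumption and not necessarily the paper's exact fusion strategy.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """evidence: (K,) non-negative outputs for K classes (e.g., from a softplus head)."""
    alpha = evidence + 1.0
    strength = alpha.sum()
    belief = evidence / strength
    uncertainty = len(evidence) / strength        # u = K / S, as in subjective logic
    return belief, uncertainty

def fuse_views(evidences):
    beliefs, weights = [], []
    for e in evidences:
        b, u = dirichlet_uncertainty(np.asarray(e, dtype=float))
        beliefs.append(b)
        weights.append(1.0 - u)                   # lower risk -> larger weight
    weights = np.array(weights) / np.sum(weights)
    return sum(w * b for w, b in zip(weights, beliefs))

aerial = [9.0, 1.0, 0.5]     # confident aerial view (large total evidence)
ground = [0.3, 0.4, 0.2]     # ambiguous ground view (small total evidence)
print(fuse_views([aerial, ground]).argmax())      # fused decision follows the confident view
```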
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose CORGI-PM, a Chinese cOrpus foR Gender bIas Probing and Mitigation, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
Considering the computation complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), an innovative idea for realizing lightweight models through the synergy of quantization and distillation. The training process of the quantization model is guided by its full-precision counterpart, which saves time and cost without requiring a huge pre-trained model in advance. Second, we put forward a hybrid quantization (HQ) module to obtain the optimal bit width automatically under a constrained condition, where a threshold on the distribution distance between the center and samples is applied in the weight value search space. Third, in order to improve information transformation, we propose a one-to-one self-teaching (OST) module to give the student network an ability of self-judgment. A switch control machine (SCM) builds a bridge between the student network and the teacher network at the same location to help the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms existing detectors. The tiny parameter size (<9.7 MB) and Bit-Operations (BOPs) (<2158 G), compared with any remote sensing-based, lightweight, or distillation-based algorithms, demonstrate its superiority in the lightweight design domain. Our code and model will be released at https://github.com/icey-zhang/GHOST.
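The sketch below illustrates the bit-width selection idea behind the HQ module: for each layer, candidate bit widths are tried and the lowest one is kept whose quantized weights stay within a distribution-distance threshold of the full-precision weights. The distance measure (mean squared quantization error) and the threshold value are illustrative assumptions rather than GHOST's exact criterion.

```python
import numpy as np

def uniform_quantize(w, bits):
    """Symmetric uniform quantization of a weight tensor to the given bit width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

def select_bit_width(w, candidates=(2, 4, 6, 8), threshold=1e-4):
    for bits in sorted(candidates):                   # prefer the cheapest width
        dist = np.mean((w - uniform_quantize(w, bits)) ** 2)
        if dist <= threshold:
            return bits, dist
    return max(candidates), dist                      # fall back to the widest width

rng = np.random.default_rng(0)
layer_weights = rng.normal(0, 0.05, size=(256, 256))  # toy full-precision layer weights
bits, dist = select_bit_width(layer_weights)
print(f"chosen bit width: {bits}, distance: {dist:.2e}")
```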